Cloud native requirements
Minimum requirements
You need a machine with the following files and packages to run the installation. You can run the installation from a laptop or a separate VM; the commands do not need to be run on the Kubernetes cluster itself.
Note For complete details on how to install Collibra DQ on Kubernetes with Docker containers, see Cloud native install.
Prerequisites
- Kubernetes cluster -- EKS, GKE, AKS, OpenShift, Rancher
- Helm (v3)
- kubectl
- Cloud command line SDK, such as gcloud CLI, AWS CLI or similar
- External PostgreSQL DB version 11.9 or later, with 100 GB of storage, 4 to 8 cores, and 4 to 8 GB of memory
- Private container registry -- to store images
- LoadBalancer -- IngressController -- Ingress
- Egress networking access
- Helm Chart
- Images, image access key
- Minimum pod requirement -- 2 cores, 2GB RAM
- If you bring in your own Spark executor pod launch template, ensure that the service account used to launch Spark executor pods has the permission to do so. Refer to the executor launch template for more information.
Files
- The helm chart.
- JKS files with secrets created in kubectl:
- dq-ssl-secret
- dq-pull-secret*
- A spark-gcs-secret you create from your service account file or token.
Note * Available upon request from Collibra.
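For example, before running the Helm install you can confirm that these secrets exist in the target namespace; the secret names below are the defaults described in this section and the namespace is a placeholder:
kubectl get secret dq-ssl-secret dq-pull-secret spark-gcs-secret --namespace <namespace>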
Supported Kubernetes version
As of version 2024.10, Collibra DQ supports Kubernetes versions 1.21 - 1.30. Collibra DQ versions before 2024.10 support Kubernetes versions 1.21 - 1.29.
Application system requirements
Component | Processor | Memory | Storage |
---|---|---|---|
Collibra DQ Web | 1 core | 2 GB | 10 MB PVC |
DQ Agent | 1 core | 1 GB | 100 MB PVC |
DQ Metastore | 1 core | 2 GB | 10 GB PVC |
Spark* | 2 cores | 2 GB | - |
Note * This is the minimum quantity of resources required to run a Spark job in Kubernetes. This amount of resources only provides the ability to scan a few megabytes of data with no more than a single job running at a given time. Proper sizing of the compute space must take into account the largest dataset that may be scanned, as well as the desired concurrency.
Network service considerations
DQ Web is the only required component that needs to be directly accessed from outside of Kubernetes. History Server is the only other component that users can access directly; however, it is optional.
If the target Kubernetes platform supports a LoadBalancer service type, you can configure the Helm chart to directly deploy the externally accessible endpoint.
Note For testing purposes, you can also configure the Helm chart to deploy a NodePort service type.
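For illustration, after the chart is deployed you can inspect the resulting services to find the externally accessible endpoint; the namespace is a placeholder:
# For a LoadBalancer service, note the EXTERNAL-IP column; for a NodePort service, note the mapped port.
kubectl get services --namespace <namespace>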
For the Ingress service type, deploy Collibra DQ without an externally accessible service and then attach the Ingress service separately. This applies when you use a third-party Ingress controller such as NGINX, Contour, etc.
Note The Helm chart is able to deploy an Ingress on GKE and EKS platforms, however, there is a wide variety of possible Ingress configurations that have not been tested.
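As a rough sketch only, an Ingress for DQ Web could be created imperatively as shown below; the Ingress class, host, service name (dq-web), and port (9000) are assumptions and must match your actual controller and deployment:
# Route external traffic for the assumed host to the assumed DQ Web service and port.
kubectl create ingress dq-web \
--class=nginx \
--rule="dq.example.com/*=dq-web:9000" \
--namespace <namespace>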
Obtaining credentials
Kubernetes stores credentials in the form of secrets. Secrets are base64-encoded objects that you can mount into application containers as files and that application components can reference at runtime. You use pull secrets to access secured container registries to obtain application containers.
Note Deploying containers directly from the Collibra image repository is not recommended. You should only access the Collibra image registry for the initial download and validation of Docker images. After this, upload and store the images in your private registry to give you control over when the images are updated and to eliminate any operational dependencies on Collibra's repository.
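For example, a typical workflow is to pull each image once from the Collibra registry, retag it, and push it to your private registry; the image name, tag, and registry hosts below are placeholders:
# Pull the image from the Collibra registry using the provided pull credentials.
docker pull <cdq-registry-server>/<image>:<tag>
# Retag the image for your private registry.
docker tag <cdq-registry-server>/<image>:<tag> <private-registry>/<image>:<tag>
# Push the image to the private registry that your Helm chart references.
docker push <private-registry>/<image>:<tag>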
SSL certificates
To enable SSL for secure access to DQ Web, a keystore that contains a signed certificate, keychain, and private key is required. This keystore must be available in the target namespace before you deploy Collibra DQ.
Note By default, Collibra DQ looks for a secret called dq-ssl-secret to find the keystore.
Note Although it is possible to deploy with SSL disabled, it is not recommended.
Cloud storage credentials
If you enable History Server, a distributed filesystem is required. Currently, Collibra DQ supports S3 and GCS for Spark history log storage.
Note Azure Blob and HDFS are on the near-term roadmap.
Target storage system | Credentials requirements |
---|---|
S3 | An IAM Role with access to the target bucket needs to be attached to the Kubernetes nodes of the namespace where Collibra DQ is being deployed. |
GCS | You must create a secret from the JSON key file of a service account with access to the log bucket. The secret must be available in the namespace before you deploy Collibra DQ. By default, Collibra DQ looks for a secret called spark-gcs-secret, if GCS is enabled for Spark history logs. You can change this via a Helm chart argument. |
Container pull secret
Collibra Data Quality & Observability containers are stored in a secured repository in Google Container Registry. For Collibra DQ to successfully pull the containers when deployed, a pull secret with access to the container registry must be available in the target namespace.
Note By default, Collibra DQ looks for a pull secret named dq-pull-secret. You can change this via a Helm chart argument.
Spark service account
To enable DQ Agent and the Spark driver to create and destroy compute containers, you must have a service account with a role that allows get/list/create/delete operations on pods/services/secrets/configMaps in the target namespace. By default, Collibra DQ attempts to create the required service account and the required RoleBinding to the default Edit role. Edit is a role that is generally available in a Kubernetes cluster. If the Edit role is not available, you can manually create it.
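If you need to create these objects manually, a minimal sketch looks like the following; the service account name is a placeholder and the binding uses the built-in edit ClusterRole scoped to the namespace:
# Create the service account used to launch Spark driver and executor pods.
kubectl create serviceaccount <spark-sa> --namespace <namespace>
# Grant it the built-in edit role within the target namespace only.
kubectl create rolebinding <spark-sa>-edit \
--clusterrole=edit \
--serviceaccount=<namespace>:<spark-sa> \
--namespace <namespace>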
Accessing the platform
To deploy anything to a Kubernetes cluster, the first step is to install the required client utilities and configure access:
- kubectl: The main method of communication with a Kubernetes cluster. All configuration or introspection tasks will be performed using kubectl.
- helm v3: Used to deploy the Collibra DQ Helm chart without hand coding manifests.
After you install the utilities, the next step is to configure a kube-context that points to and authenticates to the target platform. On cloud platforms like GKE and EKS, this process is completely automated through their respective CLI utilities.
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
gcloud container clusters get-credentials <cluster-name>
In private clouds, this process varies from organization to organization; however, the platform infrastructure team should be able to provide the target kube-context entry.
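Whichever platform you target, you can verify that the context is active and the cluster is reachable before continuing:
# Confirm that the active kube-context points at the target cluster.
kubectl config current-context
# A successful node listing confirms authentication and network connectivity.
kubectl get nodes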
Preparing secrets
Once access to the target platform is confirmed, you can begin the preparation of the namespace. Typically the namespace that Collibra DQ is going to be deployed into is pre-allocated by the platform team.
kubectl create namespace <namespace>
Note There is a lot more that can go into namespace creation such as resource quota allocation, but that is generally a task for the platform team.
Create an SSL keystore secret
Note For complete details on how to install Collibra DQ on Kubernetes with Docker containers, see Cloud native install.
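As a minimal sketch, assuming the default secret name dq-ssl-secret and a JKS keystore file, the secret can be created as follows; confirm the expected secret contents against the Cloud native install documentation:
# Create the keystore secret that DQ Web reads to enable SSL.
kubectl create secret generic dq-ssl-secret \
--from-file /path/to/keystore.jks \
--namespace <namespace>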
Create a container pull secret
Note For complete details on how to install Collibra DQ on Kubernetes with Docker containers, see Cloud native install.
JSON key file credential
kubectl create secret docker-registry dq-pull-secret \
--docker-server=<cdq-registry-server> \
--docker-username=_json_key \
--docker-email=<service-account-email> \
--docker-password="$(cat /path/to/key.json)" \
--namespace <namespace>
Short-lived access token
kubectl create secret docker-registry dq-pull-secret \
--docker-server=<cdq-registry-server> \
--docker-username=oauth2accesstoken \
--docker-email=<service-account-email> \
--docker-password="<access-token-text>" \
--namespace <namespace>
Warning GCP OAuth tokens are usually only valid for 1 hour. This type of credential is excellent if the goal is to pull containers into a private registry. It can be used as the pull secret to access containers directly; however, the secret would have to be recreated with a fresh token before restarting any of the Collibra DQ components.
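One way to obtain such a short-lived token, assuming the gcloud CLI is authenticated with an account that can read the registry, is:
# Print a short-lived OAuth access token for the active gcloud account.
gcloud auth print-access-token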
Create a GCS credential secret
kubectl create secret generic spark-gcs-secret \
--from-file /path/to/key.json \
--namespace <namespace>
Warning The file name that you use in the --from-file argument should be spark-gcs-secret. If the file name is anything else, you must include an additional argument specifying the GCS secret name in the Helm command.
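Alternatively, you can keep any local file name and set the key inside the secret explicitly; the expected key name spark-gcs-secret and the JSON key file path shown here are assumptions based on the description above:
# Store the service account JSON key under the key name spark-gcs-secret.
kubectl create secret generic spark-gcs-secret \
--from-file=spark-gcs-secret=/path/to/key.json \
--namespace <namespace>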
Note For complete details on how to install Collibra DQ on Kubernetes with Docker containers, see Cloud native install.